5 research outputs found

    Is the U.S. Legal System Ready for AI's Challenges to Human Values?

    Our interdisciplinary study investigates how effectively U.S. laws confront the challenges posed by Generative AI to human values. Through an analysis of diverse hypothetical scenarios crafted during an expert workshop, we have identified notable gaps and uncertainties within the existing legal framework regarding the protection of fundamental values, such as privacy, autonomy, dignity, diversity, equity, and physical/mental well-being. Constitutional and civil rights law, it appears, may not provide sufficient protection against AI-generated discriminatory outputs. Furthermore, even setting aside the liability shield provided by Section 230, proving causation for defamation and product liability claims is a challenging endeavor due to the intricate and opaque nature of AI systems. To address the unique and unforeseeable threats posed by Generative AI, we advocate for legal frameworks that evolve to recognize new threats and provide proactive, auditable guidelines to industry stakeholders. Addressing these issues requires deep interdisciplinary collaboration to identify harms, values, and mitigation strategies. Comment: 25 pages, 7 figures

    Case Repositories: Towards Case-Based Reasoning for AI Alignment

    Case studies commonly form the pedagogical backbone in law, ethics, and many other domains that face complex and ambiguous societal questions informed by human values. Similar complexities and ambiguities arise when we consider how AI should be aligned in practice: when faced with vast quantities of diverse (and sometimes conflicting) values from different individuals and communities, with whose values is AI to align, and how should AI do so? We propose a complementary approach to constitutional AI alignment, grounded in ideas from case-based reasoning (CBR), that focuses on the construction of policies through judgments on a set of cases. We present a process to assemble such a case repository by: 1) gathering a set of "seed" cases -- questions one may ask an AI system -- in a particular domain, 2) eliciting domain-specific key dimensions for cases through workshops with domain experts, 3) using LLMs to generate variations of cases not seen in the wild, and 4) engaging with the public to judge and improve cases. We then discuss how such a case repository could assist in AI alignment, both by directly acting as precedents to ground acceptable behaviors, and as a medium for individuals and communities to engage in moral reasoning around AI. Comment: MP2 workshop @ NeurIPS 202

    Freedom of Algorithmic Expression

    Can content moderation on social media be considered a form of speech? If so, would government regulation of content moderation violate the First Amendment? These are the main arguments of social media companies after Florida and Texas legislators attempted to restrict social media platforms’ authority to de-platform objectionable content. This article examines whether social media companies’ arguments have valid legal grounds. To this end, the article proposes three elements for determining that algorithms qualify as “speech”: (1) the algorithms are designed to communicate messages; (2) the relevant messages reflect cognitive or emotive ideas beyond mere operational matters; and (3) they represent the company’s standpoints. The application of these elements makes it clear that social media algorithms can be considered speech when they are designed to express companies’ values, ethics, and identity (as they often are). However, conceptualizing algorithms as speech does not automatically award a social media company a magic shield against state or federal regulation. It is true that social media platforms’ position is likely to be favored by the U.S. Supreme Court, which has increasingly taken an all-or-nothing approach whereby all speech invokes strict scrutiny of government regulation. Instead, this article argues for the restoration of the Court’s approach prior to the 1970s, when decisions emphasized considerations such as the democratic values of speech, the irreplaceability of forums, and the socioeconomic inequality of speakers and audiences. Under the latter principles, social media companies’ market dominance and their harmful effects on juveniles and political polarization would justify legislative efforts to increase algorithmic transparency even if they restrict social media’s free speech. Therefore, most big tech companies’ algorithms can and should be regulated for legitimate government purposes.

    Comparing Satisfaction/Dissatisfaction and Public Confidence in the ITS Environment in Public and Private Transportation

    Although Intelligent Transport Systems (ITS) are widely used in both public and private transportation, the factors that influence satisfaction and public confidence appear to differ between the two modes of transportation. The purpose of this study is to compare the justice dimensions that influence satisfaction and public confidence in the context of ITS in public and private transportation and to explore implications for Citizen/Customer Relationship Management (CRM) and public policy. This study finds that in public transportation, policy planners should focus on freedom to communicate views and opinions, ease of engaging in commuting, adaptability of commuting to reflect individual circumstances, the manner and behavior of service personnel, and the provision of caring, individual attention toward passengers. In private transportation, policy planners should focus on the manner and behavior of service personnel and the provision of caring, individual attention toward passengers.